
    Autonomous Movement Control of Coaxial Mobile Robot based on Aspect Ratio of Human Face for Public Relation Activity Using Stereo Thermal Camera

    In recent years, robots that recognize nearby people and provide guidance, information, and monitoring have been attracting attention. Conventional human recognition technology mainly relies on cameras or laser range finders. However, camera-based recognition is hampered by fluctuations in lighting 1), and laser range finders are easily affected by the recognition environment, for example misrecognizing a chair's leg as a person's leg 2). We therefore propose a human recognition method using a thermal camera, which can visualize human body heat. This study aims to realize human-following autonomous movement based on this recognition. In addition, the distance from the robot to the person is measured with a stereo thermal camera composed of two thermal cameras. A coaxial two-wheeled robot, which is compact and capable of turning on the spot, is used as the mobile platform. Combining these elements, we conducted autonomous movement experiments in which the coaxial mobile robot followed a recognized person using the stereo thermal camera, and confirmed that it moved appropriately to the recognized person's location in multiple use cases (scenarios). However, the accuracy of distance measurement by stereo vision is inferior to that of laser measurement and needs to be improved for movements that require higher accuracy.
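
    The distance measurement described above follows the standard pinhole stereo model, in which depth is recovered from the horizontal disparity of the same face between the two thermal images. The sketch below illustrates only that calculation; the focal length, baseline, and pixel coordinates are assumed values for illustration, not the parameters of the actual stereo thermal camera.

        # Minimal sketch of stereo depth from disparity (Z = f * B / d).
        # All constants below are assumptions, not the paper's calibration.
        FOCAL_LENGTH_PX = 525.0   # assumed focal length of each thermal camera, in pixels
        BASELINE_M = 0.12         # assumed baseline between the two cameras, in metres

        def distance_from_disparity(x_left: float, x_right: float) -> float:
            """Pinhole stereo model: depth Z = f * B / disparity."""
            disparity = x_left - x_right
            if disparity <= 0:
                raise ValueError("disparity must be positive for a valid match")
            return FOCAL_LENGTH_PX * BASELINE_M / disparity

        # Example: the face centre appears at x = 312 px in the left image and
        # x = 296 px in the right image, i.e. a disparity of 16 px.
        print(f"Estimated distance: {distance_from_disparity(312, 296):.2f} m")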

    Estimation of the Shoulder Joint Angle using Brainwaves

    This paper presents estimation of the shoulder joint angle as basic research toward developing a machine interface using EEG. The raw EEG voltage signals and the power density spectrum of the voltage values were used as learning features. Hebbian learning was applied to a multilayer perceptron network for pattern classification to estimate shoulder joint angles of 0°, 90°, and 180°. Experimental results showed that up to 63.3% of motions could be correctly classified using the raw EEG voltage values with the neural network. Further, with selected electrodes and power density spectrum features, accuracy rose to 93.3%, with more stable motion estimation.
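
    As a rough illustration of the classification stage, the sketch below feeds power density spectrum features from a few EEG channels into a multilayer perceptron that predicts one of the three shoulder angles. The paper trains its network with Hebbian learning, whereas this sketch uses scikit-learn's standard backpropagation-trained MLP as a stand-in; the sampling rate, electrode count, and placeholder data are assumptions.

        import numpy as np
        from scipy.signal import welch
        from sklearn.neural_network import MLPClassifier

        FS = 256  # assumed EEG sampling rate in Hz

        def psd_features(epochs: np.ndarray) -> np.ndarray:
            """epochs: (n_trials, n_channels, n_samples) -> one flattened PSD vector per trial."""
            _, psd = welch(epochs, fs=FS, nperseg=FS, axis=-1)
            return psd.reshape(len(epochs), -1)

        # Placeholder data: 90 trials, 4 selected electrodes, 2 s of EEG per trial.
        rng = np.random.default_rng(0)
        X = psd_features(rng.standard_normal((90, 4, 2 * FS)))
        y = np.repeat([0, 90, 180], 30)  # target shoulder angle per trial

        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        clf.fit(X, y)
        print("Training accuracy:", clf.score(X, y))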

    Trend analysis and fatality causes in Kenyan roads: A review of road traffic accident data between 2015 and 2020

    With increasing population and motorization, Kenya, like other African countries, faces a tragic burden of road traffic accidents (RTAs). This paper examines five years of data (2015–2020) downloaded from the National Transport and Safety Authority (NTSA) website to identify trends and review the progress of traffic accidents in the country. The objective is to assess the prevalence of accidents within affected groups and locations and to identify trends and generalized causative factors from the reported data. A review of the literature shows that research activity focused on RTAs in the country is minimal compared with the social significance of accidents. The data were extracted and classified using Latent Dirichlet Allocation, a machine learning algorithm modelled in Matlab, to group reported accident briefs into closely related general categories/topics. Four categories were identified as leading causes of fatality in the country: knocking down victims, hit-and-run, losing control, and head-on collision. The identified causes point to preventable driver errors, which agrees with other researchers. From the trend analysis, fatalities and injuries increased by 26% and 46.5%, respectively, between January 2015 and January 2020. This paper found that injuries to vulnerable road users (pedestrians, pillion passengers, and motorcyclists) have increased severalfold compared with the 2015 data. From the discussion, urgent fine-tuning of policing is needed to protect vulnerable road users and curb the widely decried driver behavior. The paper recommends fine-tuning data collection to capture accident details that will be useful in modeling and data analysis for future planning.
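
    For readers unfamiliar with the topic-modelling step, the sketch below shows the general shape of grouping short accident briefs with Latent Dirichlet Allocation. The paper implements this in Matlab; the sketch uses scikit-learn instead, and the example briefs are invented placeholders rather than NTSA records.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        # Invented example briefs, standing in for the reported accident descriptions.
        briefs = [
            "pedestrian knocked down while crossing the highway",
            "driver lost control and the vehicle rolled",
            "head on collision between a matatu and a lorry",
            "motorcyclist injured in a hit and run, driver fled the scene",
        ]

        vectorizer = CountVectorizer(stop_words="english")
        counts = vectorizer.fit_transform(briefs)

        lda = LatentDirichletAllocation(n_components=4, random_state=0)
        doc_topics = lda.fit_transform(counts)  # per-brief topic probabilities

        # Print the top words of each discovered topic/category.
        terms = vectorizer.get_feature_names_out()
        for k, weights in enumerate(lda.components_):
            top = [terms[i] for i in weights.argsort()[-3:][::-1]]
            print(f"Topic {k}: {top}")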

    Sim–Real Mapping of an Image-Based Robot Arm Controller Using Deep Reinforcement Learning

    Models trained with deep reinforcement learning (DRL) have been deployed in various areas of robotics with varying degrees of success. To overcome the limitations of data gathering in the real world, DRL training utilizes simulated environments and transfers the learned policy to real-world scenarios, i.e., sim–real transfer. Simulators fail to accurately capture the entire dynamics of the real world, so simulation-trained policies often fail when applied to reality, a problem termed the reality gap (RG). In this paper, we propose a search (mapping) algorithm that takes real-world observations (images) and maps them to the policy-equivalent images in the simulated environment using a convolutional neural network (CNN) model. This two-step training process, a DRL policy plus a mapping model, overcomes the RG problem with simulated data only. We evaluated the proposed system on a gripping task with a custom-made robot arm in the real world and compared its performance against a conventional DRL sim–real transfer system. The conventional system achieved a 15–57% success rate in the gripping operation, depending on the position of the target object, while the mapping-based sim–real system achieved 100%. The experimental results demonstrated that the proposed DRL-with-mapping method appropriately matched real-world observations to the simulated environment, confirming that the scheme can achieve high sim–real generalization at significantly lower training cost.
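
    A rough sketch of the mapping idea is given below: a convolutional encoder–decoder takes a real-world camera image and outputs a policy-equivalent simulated image, so the frozen simulation-trained policy can be applied unchanged. The layer sizes, 64 x 64 RGB resolution, and training loop are assumptions for illustration, not the paper's exact architecture.

        import torch
        import torch.nn as nn

        class RealToSimMapper(nn.Module):
            """CNN that maps a real camera image to its simulated counterpart."""
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
                    nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
                    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
                )

            def forward(self, real_image: torch.Tensor) -> torch.Tensor:
                return self.decoder(self.encoder(real_image))

        # Training pairs real images with their policy-equivalent simulated images
        # (random placeholders here) and minimises a pixel-wise reconstruction loss.
        mapper = RealToSimMapper()
        optimiser = torch.optim.Adam(mapper.parameters(), lr=1e-3)
        real_batch = torch.rand(8, 3, 64, 64)
        sim_batch = torch.rand(8, 3, 64, 64)
        loss = nn.functional.mse_loss(mapper(real_batch), sim_batch)
        loss.backward()
        optimiser.step()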

    Image Presentation Method for Human Machine Interface Using Deep Learning Object Recognition and P300 Brain Wave

    Welfare robots, as a category of robotics, seek to improve the quality of life of the elderly and patients by providing a control mechanism that enables users to be self-dependent. This is achieved through man-machine interfaces that manipulate external processes such as feeding or communicating. This research aims to realize a man-machine interface, applicable to patients with locked-in syndrome, that combines brainwaves with object recognition. The system uses a camera with a pretrained object-detection model that recognizes the environment and displays the detected objects in an interface, from which a choice is solicited using P300 signals. Because the system is camera based, field of view and luminance level were identified as possible influences. We designed six experiments by varying the arrangement of stimuli (triangular or horizontal) and the brightness/colour levels. The results showed that the horizontal arrangement had better accuracy than the triangular arrangement. Further, colour was identified as a key parameter for successful discrimination of the target stimuli. The precision of discrimination can therefore be improved by adopting a harmonized arrangement and selecting appropriate saturation/brightness for the interface.
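
    The selection step can be pictured roughly as follows: for each object the camera has recognized, the EEG epochs time-locked to that object's flashes are averaged, and the object whose averaged response is strongest in the P300 window is chosen. The sampling rate, window boundaries, and placeholder data in the sketch below are assumptions, not the paper's parameters.

        import numpy as np

        FS = 250                                              # assumed EEG sampling rate in Hz
        P300_WINDOW = slice(int(0.25 * FS), int(0.45 * FS))   # ~250-450 ms after each flash

        def select_target(epochs_per_stimulus: dict) -> str:
            """epochs_per_stimulus maps an object label to an array of shape
            (n_flashes, n_samples); returns the label whose averaged epoch has the
            largest mean amplitude inside the P300 window."""
            scores = {
                label: epochs.mean(axis=0)[P300_WINDOW].mean()
                for label, epochs in epochs_per_stimulus.items()
            }
            return max(scores, key=scores.get)

        # Placeholder epochs (1 s each) for three objects detected by the camera.
        rng = np.random.default_rng(1)
        candidates = {name: rng.standard_normal((10, FS)) for name in ("cup", "spoon", "phone")}
        print("Selected object:", select_target(candidates))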

    Development of an Area Scan Step Length Measuring System Using a Polynomial Estimate of the Heel Cloud Point

    Due to impaired mobility caused by aging, early detection and monitoring of gait parameters is important to avoid large medical costs at a later age. For gait training and potential tele-monitoring applications outside clinical settings, low-cost yet highly reliable gait analysis systems are needed. This research proposes using a single LiDAR system to perform automatic gait analysis with polynomial fitting. The experimental setup consists of two walking speeds, fast walk and normal walk, along a 5-m straight line. Ten test subjects (mean age 28, SD 5.2) voluntarily participated in the study. We performed polynomial fitting to estimate the step length from the heel-projection cloud-point laser data as the subject walked forward, and compared the values with the visual inspection method. The results showed that the visual inspection method is accurate to within 6 cm, while the polynomial method achieves 8 cm in the worst case (fast walking). With the accuracy difference estimated to be at most 2 cm, the polynomial method estimates heel location about as reliably as observational gait analysis. The proposed method improves accuracy by 4% compared with a previously proposed dual-laser range sensor method, which reported 57.87 cm ± 10.48, an error of 10%, whereas our method reported ±0.0633 m, a 6% error, for normal walking.
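
    The polynomial step can be sketched as follows: the scattered heel cloud points from one scan are smoothed with a low-order polynomial to estimate the heel location, and the step length is the forward distance between consecutive heel placements. The polynomial order, cluster data, and geometry in the sketch below are illustrative assumptions, not the paper's processing pipeline.

        import numpy as np

        def heel_position(cloud_xy: np.ndarray, order: int = 2) -> float:
            """cloud_xy: (n_points, 2) LiDAR points (forward x, lateral y) of one heel
            cluster; returns the fitted rearmost forward position of the heel contour."""
            x, y = cloud_xy[:, 0], cloud_xy[:, 1]
            coeffs = np.polyfit(y, x, order)              # model forward position x as a polynomial in y
            y_dense = np.linspace(y.min(), y.max(), 200)
            return float(np.polyval(coeffs, y_dense).min())  # rearmost point of the smoothed contour

        def step_lengths(heel_xs):
            """Step length = forward distance between consecutive heel placements."""
            return [b - a for a, b in zip(heel_xs, heel_xs[1:])]

        # Placeholder clusters: three heel placements roughly 0.6 m apart along the walkway.
        rng = np.random.default_rng(2)
        clusters = [rng.normal([0.6 * i, 0.0], 0.01, size=(30, 2)) for i in range(3)]
        print(step_lengths([heel_position(c) for c in clusters]))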

    Development of Surface EMG Game Control Interface for Persons with Upper Limb Functional Impairments

    In recent years, surface electromyography (sEMG) signals have been effectively applied in various fields such as control interfaces, prosthetics, and rehabilitation. We propose estimating neck rotation from sEMG and applying the estimate as a game control interface for people with disabilities or patients with functional impairment of the upper limbs. This paper uses both an equation-based estimate and a machine learning model to translate the signals into corresponding neck rotations. For testing, we designed two custom-made game scenes in Unity 3D, a dynamic 1D object-interception scene and a 2D maze scene, to be controlled by the sEMG signal in real time. Twenty-two (22) test subjects (mean age 27.95, SD 13.24) participated in the experiment to verify the usability of the interface. In the object-interception task, subjects showed stable control, inferred from an interception accuracy of more than 73%. In the 2D maze, male and female subjects recorded completion times of 98.84 s ± 50.2 and 112.75 s ± 44.2, respectively, with no significant difference in means by one-way ANOVA (p = 0.519). The results confirmed the usefulness of neck sEMG from the sternocleidomastoid (SCM) as a control interface requiring little or no calibration. The equation-based control models provide intuitive direction and speed control, while the machine learning scheme offers more stable directional control. The interface can be applied in several areas that involve neck activity, e.g., robot control and rehabilitation, as well as game interfaces, to enable entertainment for people with disabilities.
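
    As an illustration of the equation-based control path, the sketch below computes the RMS envelope of the left and right sternocleidomastoid channels and maps their difference to a signed speed command for the game. The sampling rate, window length, gain, and dead zone are assumptions, not the calibration used in the paper.

        import numpy as np

        FS = 1000            # assumed sEMG sampling rate in Hz
        WINDOW = FS // 10    # 100 ms sliding window
        GAIN = 5.0           # assumed scaling from activation difference to speed
        DEADZONE = 0.05      # ignore small activation differences (rest noise)

        def rms(window: np.ndarray) -> float:
            return float(np.sqrt(np.mean(window ** 2)))

        def neck_command(left_scm: np.ndarray, right_scm: np.ndarray) -> float:
            """Signed speed command: negative = move left, positive = move right."""
            activation = rms(right_scm[-WINDOW:]) - rms(left_scm[-WINDOW:])
            if abs(activation) < DEADZONE:
                return 0.0
            return GAIN * activation

        # Placeholder signals: the right SCM is more active, so the command is positive.
        rng = np.random.default_rng(3)
        left = 0.02 * rng.standard_normal(FS)
        right = 0.02 * rng.standard_normal(FS) + 0.2 * np.sin(np.linspace(0, 60, FS))
        print(f"Command: {neck_command(left, right):+.2f}")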